
Cloud Cluster with Elastic Scaling in Kubernetes - Overview

Elastic executions is a runtime cloud feature that takes advantage of native Kubernetes scaling capabilities to scale the Boomi runtime at a finer level of granularity. Some basic understanding of Kubernetes is therefore needed.

In node-based scaling clouds, integration processes, such as forked executions or execution workers, run as multiple JVMs within each cluster VM. In an elastic scaling cloud, those processes run as individual Kubernetes pods, outside the Boomi runtime nodes, enabling independent and more granular scaling. As a result, the Boomi cluster nodes are fewer, smaller, and static, while the execution-related pods come and go.

Kubernetes uses a two-factor algorithm for scaling that takes into account both Boomi application metrics (such as concurrent executions) and actual server resource utilization (CPU, memory). These signals determine when to spin up additional execution workers, as well as when to add or remove the infrastructure nodes and servers powering the Kubernetes cluster.
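As a rough illustration (not Boomi's implementation), the sketch below mirrors how a Kubernetes Horizontal Pod Autoscaler combines multiple signals: it computes a desired replica count per metric and scales to the largest of them. The metric values, targets, and the concurrent-executions figure are assumptions made up for this example.

```python
import math

def desired_replicas(current_replicas, metrics):
    """HPA-style decision: compute a candidate replica count per metric
    (current / target, scaled by current replicas) and take the maximum."""
    candidates = [
        math.ceil(current_replicas * current_value / target_value)
        for current_value, target_value in metrics
    ]
    return max(candidates)

# Hypothetical snapshot: one application metric plus one resource metric.
replicas = desired_replicas(
    current_replicas=3,
    metrics=[
        (24, 10),     # concurrent executions per worker: current 24 vs. target 10
        (0.55, 0.70), # average CPU utilization: current 55% vs. target 70%
    ],
)
print(replicas)  # 8 -> scale out, driven by the execution-count metric
```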

[Image: Kubernetes cluster for a clustered cloud runtime]

Elastic executions in your current cloud environment

If you plan to enable elastic executions on an existing runtime cloud, we recommend first creating a new runtime cloud installation to familiarize yourself with the capabilities and configurations, especially if you are also new to Kubernetes deployments. Once familiar, you can apply the same changes to your existing runtime cloud.

The following lists detail what changes with elastic scaling and what remains the same as in node-based scaling clouds.

Elastic scaling with Kubernetes (what changes):
  • Forked executions and execution workers run in individual Kubernetes pods instead of JVMs within the cloud nodes. As a result, cloud nodes are fewer, smaller, and static in number, since they no longer handle executions, the variable part of the workload.
  • Execution and worker pods are guaranteed their requested resources (CPU, memory): "neighbors," including other running processes, cannot consume resources that would deprive other pods of their minimum allocations.
  • Worker scaling is now a combination of Boomi application metrics (for example, the number of concurrent executions) and actual resource utilization.
  • System-defined labels on pods help identify specific attachments and process IDs for observability (see the sketch after these lists).


Node-based scaling (the following remain the same):
  • Cloud cluster headship rules.
  • Use of and interaction with the shared file system for persistence.
  • Process execution behavior.
  • Process execution performance. Elastic executions is a scaling and operational management improvement and does not inherently improve performance of individual executions.
  • Web server and atom queue broker still run in the cloud nodes.
  • JVM/heap memory is still set globally for all forked execution and workers.
  • Monthly Boomi runtime application upgrades are still handled from the platform and will still trigger a rolling restart.
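As a rough illustration of how the system-defined pod labels could be used for observability, the snippet below lists execution-related pods and prints their labels with the official Kubernetes Python client. The namespace and the label selector are assumptions for this sketch; the actual label names come from your runtime cloud installation.

```python
from kubernetes import client, config

# Load credentials from kubeconfig; use config.load_incluster_config()
# when running inside the cluster instead.
config.load_kube_config()
v1 = client.CoreV1Api()

# Hypothetical namespace and label selector; substitute the system-defined
# labels applied by your elastic-scaling runtime cloud.
pods = v1.list_namespaced_pod(
    namespace="boomi-cloud",
    label_selector="app=execution-worker",
)
for pod in pods.items:
    print(pod.metadata.name, pod.metadata.labels)
```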